8 research outputs found

    RSG: Fast Learning Adaptive Skills for Quadruped Robots by Skill Graph

    Full text link
    Developing robotic intelligent systems that can adapt quickly to unseen situations in the wild is one of the critical challenges in pursuing autonomous robotics. Although impressive progress has been made in walking stability and skill learning for legged robots, their ability to adapt quickly is still inferior to that of animals in nature. Animals are born with a massive repertoire of skills needed to survive, and can quickly acquire new ones by composing fundamental skills with limited experience. Inspired by this, we propose a novel framework, named Robot Skill Graph (RSG), for organizing the massive fundamental skills of robots and dexterously reusing them for fast adaptation. Bearing a structure similar to a Knowledge Graph (KG), RSG is composed of dynamic behavioral skills rather than the static knowledge of a KG, and it enables the discovery of implicit relations between the learning contexts and acquired skills of robots, serving as a starting point for understanding the subtle patterns in robot skill learning. Extensive experimental results demonstrate that RSG can provide rational skill inference for new tasks and environments, enabling quadruped robots to adapt to new scenarios and learn new skills rapidly.
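    As a hedged illustration of the idea above, a skill graph can be sketched as a toy structure in which learning contexts and skills are linked, and inference for a new task reuses the skills of the most similar stored context. The `SkillGraph` class, the Hamming-distance similarity, and all skill and context names below are illustrative assumptions, not the paper's actual RSG implementation.

    ```python
    class SkillGraph:
        """Toy skill graph: edges link a learning context to skills acquired in it."""

        def __init__(self):
            self.edges = {}  # context (tuple of discrete features) -> list of skills

        def add(self, context, skill):
            self.edges.setdefault(context, []).append(skill)

        def infer(self, new_context):
            # Reuse the skills of the stored context closest to the new one
            # (Hamming distance over the discrete context features).
            def dist(context):
                return sum(a != b for a, b in zip(context, new_context))
            best = min(self.edges, key=dist)
            return self.edges[best]

    g = SkillGraph()
    g.add(("flat", "dry"), "trot")
    g.add(("slope", "dry"), "crawl")
    g.add(("flat", "slippery"), "wide-stance trot")
    # An unseen context falls back on the most similar known one.
    print(g.infer(("slope", "slippery")))
    ```

    A real skill graph would, of course, use learned embeddings rather than discrete feature matching, but the lookup-and-reuse pattern is the same.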

    A unified control framework for human-robot interaction

    No full text
    Co-existence of humans and robots in the same workspace requires the robot to perform robot tasks, such as trajectory tracking, as well as interaction tasks, such as keeping a safe distance from the human. Across different human-robot interaction scenarios, interaction tasks usually carry different requirements or specifications, leading to different control strategies. In addition, because robot tasks and interaction tasks differ in nature, different controllers may be required when switching from one task to another. So far, there is no theoretical framework which integrates different robot and interaction task requirements into a unified robot control strategy. In this research, a general human-robot interaction control framework is proposed for the scenario of a human and a robot coexisting in the same workspace. We propose a general potential energy function which can be used to derive a stable and unified controller for various robot tasks and human-robot interaction tasks. Instead of designing a particular task function formalism for each subtask requirement, various tasks can be specified at the user level by simply adjusting certain task parameters. Interactive weights are also defined to specify the interaction behaviours of robots for different human-robot interaction applications. Specific interaction modes, such as human-dominant interaction and robot-dominant interaction, are detailed to demonstrate applications of the proposed control method. We show how the control framework can be applied to existing robot control systems in velocity-control or torque-control mode by developing a joint velocity reference command and an adaptive controller. Typically, industrial manipulators have closed-architecture control systems and do not come with external sensors.
    During human-robot interaction, robots operate in an uncertain environment in the presence of humans and are required to adjust their behaviours according to human intentions. Hence, external sensors such as vision systems must be added and integrated into the robots to improve their capabilities in perception and reaction. Since different configurations and types of sensors result in different sensory transformations, or Jacobian matrices, and thus lead to different models, it is in general difficult for operators or users in a factory to model the sensory systems and deploy the robots for various human-robot interaction applications. In this thesis, a new learning algorithm is derived and employed within the proposed control framework to estimate the unknown kinematics, so that various external sensors can be easily integrated into the framework to perform interaction tasks without modeling the kinematics. In the proposed framework, the robot's behaviours during the interaction can be varied by manually adjusting the task parameters. Since some of the task parameters do not correspond to any physical quantity, it may be difficult for non-expert users to set them for a specific interaction task. On the other hand, task specification through human demonstration is anticipated to be one of the most effective ways for robots to understand or imitate human behaviours, especially during human-robot interaction. Therefore, a task requirement learning algorithm is proposed whereby the motion behaviours demonstrated by a human can be acquired or learned by the robot system in a unified way.
    Doctor of Philosophy
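    As a rough sketch of how a single potential energy function can unify tracking and interaction behaviour, consider a point robot whose velocity command is the negative gradient of a weighted sum of a quadratic tracking potential and a repulsive human-distance potential; the scalar weights play the role of the interactive weights mentioned above. The specific potentials, gains, and safety distance are illustrative assumptions, not the thesis's actual task function.

    ```python
    import numpy as np

    def velocity_command(x, x_des, x_human, w_task=1.0, w_human=2.0, d_safe=0.5):
        """Velocity command from the negative gradient of a combined potential."""
        # Tracking potential 0.5*||x - x_des||^2 -> gradient (x - x_des)
        grad = w_task * (x - x_des)
        # Repulsive potential 0.5*(d_safe - d)^2, active only inside d_safe
        diff = x - x_human
        d = np.linalg.norm(diff)
        if 1e-9 < d < d_safe:
            # Gradient of the repulsive term w.r.t. x is -(d_safe - d) * diff / d
            grad += -w_human * (d_safe - d) * diff / d
        return -grad  # descend the total potential

    x = np.array([1.0, 0.0])
    # Human nearby: the command blends goal attraction with human avoidance.
    print(velocity_command(x, x_des=np.zeros(2), x_human=np.array([1.2, 0.0])))
    # Human far away: pure tracking behaviour.
    print(velocity_command(x, x_des=np.zeros(2), x_human=np.array([5.0, 5.0])))
    ```

    Raising `w_human` relative to `w_task` shifts the blend toward the interaction task without changing the controller structure, which is the point of the unified formulation.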

    Data-driven learning for robot control with unknown Jacobian

    No full text
    Unlike most control systems, robot control systems exhibit kinematic uncertainty in addition to dynamic uncertainty. The use of different types of external sensors in various configurations also results in different sensory transformations, or Jacobian matrices, and thus leads to different kinematic models. Currently, there is no systematic theoretical framework for developing data-driven neural network (NN) learning and control methods for task-space tracking control of robots with unknown kinematics and dynamics. Existing NN controllers are limited to either dynamic control or kinematic control, without considering the interaction between the inner and outer control loops. In this paper, an NN-based data-driven offline learning algorithm and an online learning controller are proposed and combined in a complementary way. The proposed task-space control algorithms can be implemented on robotic systems with a closed control architecture by accounting for the interaction with the inner control loop. Theoretical analyses are presented to show the stability of the systems, and experimental results illustrate the performance of the proposed learning algorithms.
    This work was supported by the Agency for Science, Technology and Research of Singapore (A*STAR) under the AME Individual Research Grants 2017 (Ref. A1883c0008).
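    The flavor of estimating an unknown Jacobian from data can be sketched with a toy example: the mapping dx = J dq is treated as unknown, and an estimate is updated online from measured joint and task velocities via a normalized gradient step on the prediction error. The constant true Jacobian, the learning rate, and the LMS-style update law are illustrative simplifications, not the paper's NN algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    J_true = np.array([[1.0, 0.5], [-0.3, 0.8]])  # unknown to the controller
    J_hat = np.zeros((2, 2))                      # initial estimate
    gamma = 0.5                                   # learning rate

    for _ in range(400):
        dq = rng.standard_normal(2)          # joint-space excitation
        dx = J_true @ dq                     # measured task-space velocity
        err = dx - J_hat @ dq                # prediction error
        # Normalized gradient update: drives err toward zero for each sample
        J_hat += gamma * np.outer(err, dq) / (1.0 + dq @ dq)

    print(np.round(J_hat, 3))  # close to J_true after sufficient excitation
    ```

    The normalization by `1 + dq @ dq` keeps the update stable regardless of the excitation magnitude; in the configuration-dependent case, J varies with q and an NN takes the place of the constant matrix estimate.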

    Adaptive Robot Control for Human-dominant Interactions using a General Task Function

    No full text
    When humans and a robot manipulator share the same workspace, the robot is required to interact with the humans in addition to performing its standard robot tasks. Due to the different natures of robot tasks and interaction tasks, different controllers are required when switching from one task to another. However, few results have been obtained on integrating the robot task and the interaction task under one general controller. In this paper, a general task function is employed so that different task requirements can be specified by changing certain task parameters instead of the controller. A simple active role allocation is developed such that when the human is outside the robot's workspace, the robot performs the desired robot task, and when the human enters the workspace, the robot interacts with the human in the manner specified by the human. The stability of the overall system, which integrates both the robot task and the interaction task, is shown using a Lyapunov-like analysis. Experimental results are presented to illustrate the performance of the proposed controller.
    A*STAR (Agency for Science, Technology and Research, Singapore)
    Accepted version
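    The active role allocation described above can be sketched as a simple blend: a weight near zero when the human is outside the robot's workspace (robot task dominates) and near one when inside (interaction task dominates). The sigmoid transition, workspace radius, and example commands are illustrative assumptions, not the paper's exact allocation rule.

    ```python
    import math

    def interaction_weight(d_human, r_workspace=1.0, width=0.1):
        """~0 when the human is well outside the workspace, ~1 when inside,
        with a smooth sigmoid transition at the boundary."""
        return 1.0 / (1.0 + math.exp((d_human - r_workspace) / width))

    def task_command(robot_cmd, interaction_cmd, d_human):
        w = interaction_weight(d_human)
        # Convex blend of the two task commands, elementwise.
        return [(1 - w) * r + w * i for r, i in zip(robot_cmd, interaction_cmd)]

    print(task_command([1.0, 0.0], [0.0, 0.5], d_human=2.0))  # far: robot task
    print(task_command([1.0, 0.0], [0.0, 0.5], d_human=0.2))  # near: interaction
    ```

    The smooth transition avoids the controller-switching discontinuity that a hard inside/outside test would introduce.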

    Quadrotor UAV: collision resilience behaviors

    No full text
    In this article, a safety control scheme for quadrotors is proposed to guarantee collision resilience like that of flying insects. The direction and magnitude of the contact wrench are quantitatively analyzed under a compliant contact wrench model. A nonlinear disturbance observer is developed to estimate the contact wrench exerted on the quadrotor, and effective collision detection is guaranteed based on the observer. Subsequently, a tilt-torsion decomposition-based attitude controller is developed to prioritize the correction of horizontal posture over yaw error. The attitude error is separated into a roll-pitch portion and a yaw portion; allocating a higher gain to the roll-pitch portion generates reasonable roll and pitch torques, allowing the quadrotor to recover from collisions promptly. Simulations and flight experiments demonstrate the effectiveness of the proposed collision resilience control scheme.
    This work was supported in part by the National Key Research and Development Program of China under Grant 2020YFA0711200; the National Natural Science Foundation of China under Grants 62273023, 61903019, and 61973012; the Defense Industrial Technology Development Program under Grant JCKY2020601C016; the Program for Changjiang Scholars and Innovative Research Team (IRT 16R03); the Key Research and Development Program of Zhejiang under Grant 2021C03158; the Science and Technology Key Innovative Project of Hangzhou under Grants 20182014B06 and 2022AIZD0137; the Zhejiang Provincial Natural Science Foundation under Grant LQ20F030006; and Zhejiang Lab under Grant 2019NB0AB08.
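    Separating the attitude error into a roll-pitch (tilt) part and a yaw (torsion) part corresponds to the classic swing-twist decomposition of a quaternion: the torsion part is the rotation about the body z-axis, the tilt part is what remains, and the two can then be weighted with different gains. The sketch below is a generic swing-twist decomposition, not the paper's controller.

    ```python
    import numpy as np

    def swing_twist(q):
        """q = (w, x, y, z); returns (tilt, torsion) with q = tilt * torsion,
        where torsion is the rotation about the body z-axis (yaw)."""
        w, x, y, z = q
        n = np.hypot(w, z)
        if n < 1e-9:  # 180-degree tilt: torsion is undefined, take identity
            return np.asarray(q, dtype=float), np.array([1.0, 0.0, 0.0, 0.0])
        torsion = np.array([w / n, 0.0, 0.0, z / n])  # rotation about z only
        tw, tz = torsion[0], torsion[3]
        # tilt = q * conj(torsion); its z-component cancels by construction
        tilt = np.array([w * tw + z * tz,
                         x * tw - y * tz,
                         y * tw + x * tz,
                         0.0])
        return tilt, torsion

    # Pure 90-degree yaw: all error lands in the torsion part, tilt is identity.
    tilt, torsion = swing_twist([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
    print(np.round(tilt, 3), np.round(torsion, 3))
    ```

    A controller in the spirit of the abstract would then apply a higher gain to the torque derived from the tilt part than to the one from the torsion part, so the vehicle levels itself before correcting its heading.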

    Dynamic Modularity Approach to Adaptive Control of Robotic Systems With Closed Architecture

    No full text